On the Linear Convergence of the Proximal Gradient Method for Trace Norm Regularization
Authors
Abstract
Motivated by various applications in machine learning, the problem of minimizing a convex smooth loss function with trace norm regularization has received much attention lately. Currently, a popular method for solving such problems is the proximal gradient method (PGM), which is known to have a sublinear rate of convergence. In this paper, we show that for a large class of loss functions, the convergence rate of the PGM is in fact linear. Our result is established without any strong convexity assumption on the loss function. A key ingredient in our proof is a new Lipschitzian error bound for the aforementioned trace norm–regularized problem, which may be of independent interest.
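To make the iteration concrete, the following is a minimal NumPy sketch of the PGM applied to a trace norm–regularized problem. The matrix-completion loss, the step size, and all names (svt, proximal_gradient, tau, mask) are illustrative assumptions for this sketch, not the paper's setup; the paper's contribution is the linear-convergence analysis, not the algorithm itself.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: the proximal map of tau * ||.||_* (trace norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def proximal_gradient(M, mask, tau=1.0, step=1.0, iters=500):
    # PGM for min_X 0.5 * ||mask * (X - M)||_F^2 + tau * ||X||_*.
    # The gradient mask * (X - M) is 1-Lipschitz, so a unit step size is valid.
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = mask * (X - M)                 # gradient of the smooth loss
        X = svt(X - step * grad, step * tau)  # proximal (SVT) step
    return X

# Illustrative usage: recover a low-rank matrix from half of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))
X_hat = proximal_gradient(M, rng.random(M.shape) < 0.5, tau=2.0)
```

Note that this smooth loss is convex but not strongly convex (the sampling operator has a nontrivial null space), which is exactly the regime where an error-bound argument, rather than strong convexity, is needed to obtain linear convergence.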
Similar Papers
Online Learning for Classification of Low-rank Representation Features and Its Applications in Audio Segment Classification
In this paper, a novel framework based on trace norm minimization for audio segment classification is proposed. In this framework, both feature extraction and classification are obtained by solving a corresponding convex optimization problem with trace norm regularization. For feature extraction, robust principal component analysis (robust PCA) via minimizing a combination of the nuclear norm and the l1-n...
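The robust PCA subproblem mentioned above is commonly posed as min ||L||_* + lam * ||S||_1 subject to D = L + S and solved with an augmented-Lagrangian/ADMM scheme built from two proximal maps. The sketch below is a generic version of that scheme, not this paper's own solver; the defaults for lam and mu follow common heuristics from the RPCA literature and are assumptions here.

```python
import numpy as np

def svt(X, tau):
    # Proximal map of tau * ||.||_* (singular value thresholding).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Proximal map of tau * ||.||_1 (entrywise soft thresholding).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def robust_pca(D, lam=None, mu=None, iters=300):
    # ADMM / inexact augmented Lagrangian for
    #   min ||L||_* + lam * ||S||_1   s.t.  D = L + S.
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard choice in the RPCA literature
    if mu is None:
        mu = m * n / (4.0 * np.abs(D).sum())  # common heuristic for the penalty weight
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)     # low-rank update (nuclear-norm prox)
        S = soft(D - L + Y / mu, lam / mu)    # sparse update (l1 prox)
        Y = Y + mu * (D - L - S)              # dual ascent on the constraint
    return L, S

# Illustrative usage: a low-rank matrix corrupted by sparse spikes.
rng = np.random.default_rng(0)
low = rng.standard_normal((60, 6)) @ rng.standard_normal((6, 60))
spikes = 10.0 * (rng.random(low.shape) < 0.05) * rng.standard_normal(low.shape)
L_hat, S_hat = robust_pca(low + spikes)
```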
Decomposable norm minimization with proximal-gradient homotopy algorithm
We study the convergence rate of the proximal-gradient homotopy algorithm applied to norm-regularized linear least squares problems, for a general class of norms. The homotopy algorithm reduces the regularization parameter in a series of steps, and uses a proximal-gradient algorithm to solve the problem at each step. The proximal-gradient algorithm has a linear rate of convergence given that the obj...
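The continuation strategy described here can be sketched in a few lines: start with a regularization parameter large enough that the zero solution is optimal, shrink it geometrically, and warm-start a proximal-gradient solver at each stage. The sketch below instantiates this for the trace norm with a matrix-completion loss; the decrease factor eta, the per-stage iteration count, and all function names are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    # Proximal map of tau * ||.||_* (singular value thresholding).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pgm_stage(M, mask, X, lam, iters=25):
    # Proximal gradient steps on 0.5 * ||mask * (X - M)||_F^2 + lam * ||X||_*;
    # the gradient mask * (X - M) is 1-Lipschitz, so a unit step size is valid.
    for _ in range(iters):
        X = svt(X - mask * (X - M), lam)
    return X

def homotopy(M, mask, lam_target, eta=0.5, iters_per_stage=25):
    # Decrease the regularization parameter geometrically, warm-starting
    # each stage from the previous stage's solution.
    lam = np.linalg.norm(mask * M, 2)  # at this level, X = 0 is optimal
    X = np.zeros_like(M)
    while lam > lam_target:
        lam = max(eta * lam, lam_target)
        X = pgm_stage(M, mask, X, lam, iters=iters_per_stage)
    return X

# Illustrative usage.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 4)) @ rng.standard_normal((4, 40))
X_hat = homotopy(M, rng.random(M.shape) < 0.6, lam_target=0.1)
```

The warm start is the point of the homotopy: each stage begins near its own optimum, so relatively few inner iterations are needed per stage.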
An inexact alternating direction method with SQP regularization for the structured variational inequalities
In this paper, we propose an inexact alternating direction method with square quadratic proximal (SQP) regularization for the structured variational inequalities. The predictor is obtained by solving the SQP system approximately under a significantly relaxed accuracy criterion, and the new iterate is computed directly by an explicit formula derived from the original SQP method. Under appropriat...
Adaptive Accelerated Gradient Converging Methods under Hölderian Error Bound Condition
Recent studies have shown that the proximal gradient (PG) method and the accelerated gradient (APG) method with restarting can enjoy linear convergence under a condition weaker than strong convexity, namely a quadratic growth condition (QGC). However, the faster convergence of the restarting APG method relies on a potentially unknown constant in the QGC to appropriately restart APG, which restricts its app...
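The abstract is truncated, but one widely used way to restart without knowing the QGC constant is an adaptive scheme that resets the momentum whenever the objective increases (the O'Donoghue–Candès function test). The sketch below applies that heuristic to FISTA on an l1-regularized least-squares problem; it illustrates the general restarting idea rather than the specific adaptive method proposed in this paper, and all names and parameters are assumptions.

```python
import numpy as np

def soft(x, tau):
    # Proximal map of tau * ||.||_1 (entrywise soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista_restart(A, b, lam, iters=1000):
    # FISTA for 0.5 * ||Ax - b||^2 + lam * ||x||_1 with adaptive restart:
    # the momentum is reset whenever the objective increases, so no
    # quadratic-growth constant is required.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    obj = lambda z: 0.5 * np.sum((A @ z - b) ** 2) + lam * np.abs(z).sum()
    prev = obj(x)
    for _ in range(iters):
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)  # proximal gradient step
        cur = obj(x_new)
        if cur > prev:                         # objective went up: restart momentum
            t, y = 1.0, x.copy()
            continue
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_next) * (x_new - x)    # momentum extrapolation
        x, t, prev = x_new, t_next, cur
    return x

# Illustrative usage on a random sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
x_true = rng.standard_normal(40) * (rng.random(40) < 0.2)
x_hat = fista_restart(A, A @ x_true, lam=0.1)
```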